366 research outputs found

    How can humans understand their automated cars? HMI principles, problems and solutions

    As long as vehicles do not provide full automation, the design and function of the Human Machine Interface (HMI) is crucial for ensuring that the human “driver” and the vehicle-based automated systems collaborate in a safe manner. When the driver is decoupled from active control, the design of the HMI becomes even more critical. Without mutual understanding, the two agents (human and vehicle) will fail to accurately comprehend each other’s intentions and actions. This paper proposes a set of design principles for in-vehicle HMI and reviews some current HMI designs in the light of those principles. We argue that in many respects, the current designs fall short of best practice and have the potential to confuse the driver. This can lead to a mismatch between the operation of the automation in the light of the current external situation and the driver’s awareness of how well the automation is currently handling that situation. A model to illustrate how the various principles are interrelated is proposed. Finally, recommendations are made on how, building on each principle, HMI design solutions can be adopted to address these challenges.

    Assessing Graphical Robot Aids for Interactive Co-working

    The shift towards more collaborative working between humans and robots increases the need for improved interfaces. Alongside robust measures to ensure safety and task performance, humans need to gain confidence in robot co-operators to enable true collaboration. This research investigates how graphical signage can support human–robot co-working, with the intention of increasing productivity. Participants are required to co-work with a KUKA iiwa lightweight manipulator on a manufacturing task. The three conditions in the experiment differ in the signage presented to the participants – signage relevant to the task, irrelevant to the task, or no signage. Differences across the three conditions are expected in anxiety and negative attitudes towards robots, error rate, response time, and participants’ complacency, as suggested by facial expressions. In addition to understanding how graphical languages can support human–robot co-working, this study provides a basis for further collaborative research to explore human–robot co-working in more detail.

    Glutamine synthetase gene expression during the regeneration of the annelid Enchytraeus japonensis

    Enchytraeus japonensis is a highly regenerative oligochaete annelid that can regenerate a complete individual from a small body fragment in 4–5 days. In our previous study, we performed complementary deoxyribonucleic acid subtraction cloning to isolate genes that are upregulated during E. japonensis regeneration and identified glutamine synthetase (gs) as one of the most abundantly expressed genes during this process. In the present study, we show that the full-length sequence of E. japonensis glutamine synthetase (EjGS), which is the first reported annelid glutamine synthetase, is highly similar to other known class II glutamine synthetases. EjGS shows a 61–71% overall amino acid sequence identity with its counterparts in various other animal species, including Drosophila and mouse. We performed detailed expression analysis by in situ hybridization and reveal that strong gs expression occurs in the blastemal regions of regenerating E. japonensis soon after amputation. gs expression was detectable at the cell layer covering the wound and was found to persist in the epidermal cells during the formation and elongation of the blastema. Furthermore, in the elongated blastema, gs expression was detectable also in the presumptive regions of the brain, ventral nerve cord, and stomodeum. In the fully formed intact head, gs expression was also evident in the prostomium, brain, the anterior end of the ventral nerve cord, the epithelium of buccal and pharyngeal cavities, the pharyngeal pad, and in the esophageal appendages. In intact E. japonensis tails, gs expression was found in the growth zone in actively growing worms but not in full-grown individuals. In the nonblastemal regions of regenerating fragments and in intact worms, gs expression was also detected in the nephridia, chloragocytes, gut epithelium, epidermis, spermatids, and oocytes. 
These results suggest that EjGS may play roles in regeneration, nerve function, cell proliferation, nitrogenous waste excretion, macromolecule synthesis, and gametogenesis.

    Language-free graphical signage improves human performance and reduces anxiety when working collaboratively with robots

    As robots become more ubiquitous, and their capabilities extend, novice users will require intuitive instructional information related to their use. This is particularly important in the manufacturing sector, which is set to be transformed under Industry 4.0 by the deployment of collaborative robots in support of traditionally low-skilled, manual roles. In the first study of its kind, this paper reports how static graphical signage can improve performance and reduce anxiety in participants physically collaborating with a semi-autonomous robot. Three groups of 30 participants collaborated with a robot to perform a manufacturing-type process using graphical information that was relevant to the task, irrelevant, or absent. The results reveal that the group exposed to relevant signage was significantly more accurate in undertaking the task. Furthermore, their anxiety towards robots significantly decreased as a function of increasing accuracy. Finally, participants exposed to graphical signage showed positive emotional valence in response to successful trials. At a time when workers are concerned about the threat posed by robots to jobs, and with advances in technology requiring upskilling of the workforce, it is important to provide intuitive and supportive information to users. Whilst increasingly sophisticated technical solutions are being sought to improve communication and confidence in human-robot co-working, our findings demonstrate how simple signage can still be used as an effective tool to reduce user anxiety and increase task performance.

    Jet energy measurement with the ATLAS detector in proton-proton collisions at √s = 7 TeV

    The jet energy scale and its systematic uncertainty are determined for jets measured with the ATLAS detector at the LHC in proton-proton collision data at a centre-of-mass energy of √s = 7 TeV corresponding to an integrated luminosity of 38 pb⁻¹. Jets are reconstructed with the anti-kt algorithm with distance parameters R = 0.4 or R = 0.6. Jet energy and angle corrections are determined from Monte Carlo simulations to calibrate jets with transverse momenta pT ≥ 20 GeV and pseudorapidities |η| < 4.5. The jet energy systematic uncertainty is estimated using the single isolated hadron response measured in situ and in test-beams, exploiting the transverse momentum balance between central and forward jets in events with dijet topologies and studying systematic variations in Monte Carlo simulations. The jet energy uncertainty is less than 2.5% in the central calorimeter region (|η| < 0.8) for jets with 60 ≤ pT < 800 GeV, and is maximally 14% for pT < 30 GeV in the most forward region 3.2 ≤ |η| < 4.5. The jet energy is validated for jet transverse momenta up to 1 TeV to the level of a few percent using several in situ techniques by comparing a well-known reference such as the recoiling photon pT, the sum of the transverse momenta of tracks associated to the jet, or a system of low-pT jets recoiling against a high-pT jet. More sophisticated jet calibration schemes are presented based on calorimeter cell energy density weighting or hadronic properties of jets, aiming for an improved jet energy resolution and a reduced flavour dependence of the jet response. The systematic uncertainty of the jet energy determined from a combination of in situ techniques is consistent with the one derived from single hadron response measurements over a wide kinematic range. The nominal corrections and uncertainties are derived for isolated jets in an inclusive sample of high-pT jets. Special cases such as event topologies with close-by jets, or selections of samples with an enhanced content of jets originating from light quarks, heavy quarks or gluons are also discussed and the corresponding uncertainties are determined. © 2013 CERN for the benefit of the ATLAS collaboration.
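The jets above are reconstructed with the anti-kt algorithm, whose behaviour is set entirely by its two distance measures. A minimal sketch of those measures, assuming simple (pT, y, φ) tuples as inputs (the function name and data layout are illustrative, not the ATLAS software API):

```python
import math

def antikt_distances(jets, R=0.4):
    """Pairwise and beam distances used by the anti-kt algorithm.

    `jets` is a list of (pt, y, phi) tuples, pt in GeV.
    d_ij = min(pt_i^-2, pt_j^-2) * dR_ij^2 / R^2 and d_iB = pt_i^-2;
    the algorithm repeatedly merges the pair with the smallest d_ij,
    or promotes jet i to a final jet when d_iB is smallest.
    """
    def delta_r2(a, b):
        # Squared rapidity-azimuth distance, with phi wrapped to [0, pi].
        dy = a[1] - b[1]
        dphi = abs(a[2] - b[2])
        if dphi > math.pi:
            dphi = 2.0 * math.pi - dphi
        return dy * dy + dphi * dphi

    d_ij = {}
    for i in range(len(jets)):
        for j in range(i + 1, len(jets)):
            d_ij[(i, j)] = (min(jets[i][0] ** -2, jets[j][0] ** -2)
                            * delta_r2(jets[i], jets[j]) / R ** 2)
    d_iB = [pt ** -2 for pt, _, _ in jets]
    return d_ij, d_iB
```

The pT⁻² weighting is what makes the algorithm cluster soft particles around hard ones first, yielding the regular, cone-like jets that the calibration described above relies on.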

    Measurement of the inclusive and dijet cross-sections of b-jets in pp collisions at sqrt(s) = 7 TeV with the ATLAS detector

    The inclusive and dijet production cross-sections have been measured for jets containing b-hadrons (b-jets) in proton-proton collisions at a centre-of-mass energy of sqrt(s) = 7 TeV, using the ATLAS detector at the LHC. The measurements use data corresponding to an integrated luminosity of 34 pb^-1. The b-jets are identified using either a lifetime-based method, where secondary decay vertices of b-hadrons in jets are reconstructed using information from the tracking detectors, or a muon-based method where the presence of a muon is used to identify semileptonic decays of b-hadrons inside jets. The inclusive b-jet cross-section is measured as a function of transverse momentum in the range 20 < pT < 400 GeV and rapidity in the range |y| < 2.1. The bbbar-dijet cross-section is measured as a function of the dijet invariant mass in the range 110 < m_jj < 760 GeV, the azimuthal angle difference between the two jets and the angular variable chi in two dijet mass regions. The results are compared with next-to-leading-order QCD predictions. Good agreement is observed between the measured cross-sections and the predictions obtained using POWHEG + Pythia. MC@NLO + Herwig shows good agreement with the measured bbbar-dijet cross-section. However, it does not reproduce the measured inclusive cross-section well, particularly for central b-jets with large transverse momenta.

    Transparency and Trust in Human-AI-Interaction: The Role of Model-Agnostic Explanations in Computer Vision-Based Decision Support

    Computer Vision, and hence Artificial Intelligence-based extraction of information from images, has received increasing attention in recent years, for instance in medical diagnostics. While the algorithms' complexity is a reason for their increased performance, it also leads to the "black box" problem, consequently decreasing trust towards AI. In this regard, "Explainable Artificial Intelligence" (XAI) makes it possible to open that black box and to improve the degree of AI transparency. In this paper, we first discuss the theoretical impact of explainability on trust towards AI, followed by showcasing what the use of XAI in a health-related setting can look like. More specifically, we show how XAI can be applied to understand why Computer Vision, based on deep learning, did or did not detect a disease (malaria) on image data (thin blood smear slide images). Furthermore, we investigate how XAI can be used to compare the detection strategy of two different deep learning models often used for Computer Vision: Convolutional Neural Network and Multi-Layer Perceptron. Our empirical results show that i) the AI sometimes used questionable or irrelevant data features of an image to detect malaria (even if correctly predicted), and ii) that there may be significant discrepancies in how different deep learning models explain the same prediction. Our theoretical discussion highlights that XAI can support trust in Computer Vision systems, and AI systems in general, especially through increased understandability and predictability.
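The model-agnostic idea behind such explanations can be illustrated with a toy perturbation-based importance measure: perturb each input feature of a black-box model and observe how much the prediction moves. This is a crude stand-in for LIME/SHAP-style methods, not the paper's actual setup; `model` here is any callable scoring function:

```python
import random

def perturbation_importance(model, x, n_samples=500, noise=0.5, seed=0):
    """Model-agnostic feature importance by local perturbation.

    For each feature i, add Gaussian noise to that feature alone and
    record the average absolute change in the model's output. Larger
    values mean the prediction is more sensitive to that feature.
    `model` is treated as a black box: only its outputs are used.
    """
    rng = random.Random(seed)
    base = model(x)
    importance = []
    for i in range(len(x)):
        total = 0.0
        for _ in range(n_samples):
            xp = list(x)
            xp[i] += rng.gauss(0.0, noise)  # perturb feature i only
            total += abs(model(xp) - base)
        importance.append(total / n_samples)
    return importance
```

Methods like LIME refine this idea by fitting a simple local surrogate model to the perturbed samples; for images, the "features" become superpixels that are switched on and off, which is how questionable input regions like those described above can be surfaced.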